
    Fast Dynamic Graph Algorithms for Parameterized Problems

    A fully dynamic graph is a data structure that (1) supports edge insertions and deletions and (2) answers problem-specific queries. The time complexities of (1) and (2) are referred to as the update time and the query time, respectively. There is a large body of work on dynamic graphs whose update time and query time are o(|G|), that is, sublinear in the graph size. However, almost all such work targets problems in P. In this paper, we investigate dynamic graphs for NP-hard problems, exploiting the notion of fixed-parameter tractability (FPT). We give dynamic graphs for Vertex Cover and Cluster Vertex Deletion parameterized by the solution size k. These dynamic graphs achieve almost the best possible update time O(\mathrm{poly}(k)\log n) and query time O(f(\mathrm{poly}(k),k)), where f(n,k) is the time complexity of any static graph algorithm for the problems. We obtain these results by dynamically maintaining an approximate solution which can be used to construct a small problem kernel. Exploiting the dynamic graph for Cluster Vertex Deletion, as a corollary, we obtain a quasilinear-time (polynomial) kernelization algorithm for Cluster Vertex Deletion; until now, only quadratic-time kernelization algorithms were known for this problem. We also give a dynamic graph for Chromatic Number parameterized by the solution size of Cluster Vertex Deletion, and a dynamic graph for bounded-degree Feedback Vertex Set parameterized by the solution size. Assuming the parameter is a constant, each dynamic graph can be updated in O(\log n) time and can compute a solution in O(1) time. These results are obtained by another approach. Comment: SWAT 2014, to appear
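
    As a rough illustration of the "approximate solution" idea for Vertex Cover (not the paper's data structure, and without its O(\mathrm{poly}(k)\log n) update bound): the endpoints of a maximal matching form a 2-approximate vertex cover, and such a matching can be maintained under edge insertions and deletions. The Python sketch below uses hypothetical class and method names of our own; deletions may rescan a vertex's neighbourhood.

        # Sketch only: maintain a maximal matching under edge updates; its
        # endpoints form a 2-approximate vertex cover, from which a small
        # kernel could be built and solved by any static FPT algorithm.
        class DynamicApproxVertexCover:
            def __init__(self):
                self.adj = {}      # vertex -> set of neighbours
                self.match = {}    # matched vertex -> its partner

            def _try_match(self, u):
                # greedily match u with any currently unmatched neighbour
                for v in self.adj.get(u, ()):
                    if v not in self.match:
                        self.match[u] = v
                        self.match[v] = u
                        return

            def insert_edge(self, u, v):
                self.adj.setdefault(u, set()).add(v)
                self.adj.setdefault(v, set()).add(u)
                if u not in self.match and v not in self.match:
                    self.match[u] = v
                    self.match[v] = u

            def delete_edge(self, u, v):
                self.adj.get(u, set()).discard(v)
                self.adj.get(v, set()).discard(u)
                if self.match.get(u) == v:
                    # the matched edge disappeared: re-match both endpoints
                    del self.match[u]
                    del self.match[v]
                    self._try_match(u)
                    self._try_match(v)

            def has_cover_of_size(self, k):
                # endpoints of a maximal matching are a 2-approximate cover;
                # if it exceeds 2k, no vertex cover of size k can exist
                return len(self.match) <= 2 * k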

    Blame Trees

    We consider the problem of merging individual text documents, motivated by the single-file merge algorithms of document-based version control systems. Abstracting away the merging of conflicting edits to an external conflict resolution function (possibly implemented by a human), we consider the efficient identification of conflicting regions. We show how to implement a tree-based document representation that supports quickly answering a data structure query inspired by the “blame” feature of some version control systems. A “blame” query associates every line of a document with the revision in which it was last edited. Our tree uses this idea to quickly identify conflicting edits. We show how to perform a merge operation in time proportional to the sum of the logarithms of the shared regions of the documents, plus the cost of conflict resolution. Our data structure is functional and therefore confluently persistent, allowing arbitrary version DAGs as in real version-control systems. Our results rely on concurrent traversal of two trees with short circuiting when shared subtrees are encountered.
    Funding: United States Defense Advanced Research Projects Agency (Clean-Slate Design of Resilient, Adaptive, Secure Hosts (CRASH) program, BAA10-70; contract #N66001-10-2-4088, Bridging the Security Gap with Decentralized Information Flow Control) and the Danish National Research Foundation (Center for Massive Data Algorithmics, MADALGO).
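
    As a rough illustration of the two ingredients highlighted above (per-line revision annotations and merges that short-circuit on shared subtrees), here is a hypothetical Python sketch. It is not the paper's blame-tree structure, carries none of its complexity guarantees, and assumes for simplicity that the three trees being merged have the same shape.

        # Sketch only: a purely functional line tree whose leaves remember the
        # revision that last edited them, and a three-way merge that skips
        # structurally shared subtrees (detected by object identity).
        class Node:
            __slots__ = ("left", "right", "line", "rev")

            def __init__(self, left=None, right=None, line=None, rev=None):
                self.left, self.right, self.line, self.rev = left, right, line, rev

            @property
            def is_leaf(self):
                return self.line is not None

        def edit(node, path, new_line, rev):
            # return a new version of the tree with one leaf replaced (path copying)
            if node.is_leaf:
                return Node(line=new_line, rev=rev)
            if path[0] == 0:
                return Node(edit(node.left, path[1:], new_line, rev), node.right)
            return Node(node.left, edit(node.right, path[1:], new_line, rev))

        def merge(base, a, b, resolve):
            # three-way merge; shared subtrees are returned without being traversed
            if a is b or b is base:
                return a
            if a is base:
                return b
            if base.is_leaf:
                # both sides changed the same region since `base`: a conflict,
                # deferred to the external conflict-resolution function
                return resolve(base, a, b)
            return Node(merge(base.left, a.left, b.left, resolve),
                        merge(base.right, a.right, b.right, resolve))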

    Cache-Oblivious Persistence

    Partial persistence is a general transformation that takes a data structure and allows queries to be executed on any past state of the structure. The cache-oblivious model is the leading model of a modern multi-level memory hierarchy. We present the first general transformation for making cache-oblivious data structures partially persistent.
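
    For readers unfamiliar with the term, the sketch below shows partial persistence in its simplest classic form (the "fat node" method, in which every mutable field keeps a version-stamped history, and reads binary-search that history). It only illustrates the semantics the abstract refers to, queries on any past version but updates only on the newest one, and has nothing cache-oblivious about it.

        # Sketch only: the classic "fat node" route to partial persistence.
        # Every field keeps a version-stamped history; reads binary-search it.
        import bisect

        class PersistentField:
            def __init__(self, initial, version=0):
                self.versions = [version]     # ascending version stamps
                self.values = [initial]       # value written at each stamp

            def write(self, value, version):
                # updates are only allowed at the latest version (partial persistence)
                assert version >= self.versions[-1]
                if version == self.versions[-1]:
                    self.values[-1] = value
                else:
                    self.versions.append(version)
                    self.values.append(value)

            def read(self, version):
                # value visible at `version`: last write with stamp <= version
                i = bisect.bisect_right(self.versions, version) - 1
                return self.values[i]

        # usage: a single counter whose whole history stays queryable
        counter = PersistentField(0)
        counter.write(5, version=1)
        counter.write(9, version=3)
        assert counter.read(0) == 0 and counter.read(2) == 5 and counter.read(3) == 9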

    Sampling Theorem and Discrete Fourier Transform on the Riemann Sphere

    Using coherent-state techniques, we prove a sampling theorem for Majorana's (holomorphic) functions on the Riemann sphere and we provide an exact reconstruction formula as a convolution product of N samples and a given reconstruction kernel (a sinc-type function). We also discuss the effect of over- and under-sampling. Sample points are roots of unity, a fact which allows explicit inversion formulas for resolution and overlapping kernel operators through the theory of Circulant Matrices and Rectangular Fourier Matrices. The case of band-limited functions on the Riemann sphere, with spins up to J, is also considered. The connection with the standard Euler angle picture, in terms of spherical harmonics, is established through a discrete Bargmann transform. Comment: 26 LaTeX pages. Final version published in J. Fourier Anal. Appl.
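
    The role of the circulant structure mentioned above can be checked numerically: an operator built from samples at the N-th roots of unity acts by circular convolution, so it is diagonalised by the discrete Fourier transform and can be inverted mode by mode. The Python sketch below uses an arbitrary placeholder kernel, not the paper's sinc-type reconstruction kernel.

        # Sketch only: a circulant kernel operator is diagonalized by the DFT,
        # so the linear system C @ f = samples can be solved mode by mode.
        import numpy as np

        N = 8
        kernel = np.exp(-np.arange(N) / 2.0)          # hypothetical convolution kernel
        C = np.array([np.roll(kernel, k) for k in range(N)]).T   # circulant matrix

        # eigenvalues of a circulant matrix are the DFT of its first column
        eigs = np.fft.fft(C[:, 0])
        samples = np.random.default_rng(0).normal(size=N)        # fake sampled data

        # invert via the FFT instead of a dense solve
        f = np.fft.ifft(np.fft.fft(samples) / eigs).real
        assert np.allclose(C @ f, samples)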

    Parity-Violating Interaction Effects I: the Longitudinal Asymmetry in pp Elastic Scattering

    The proton-proton parity-violating longitudinal asymmetry is calculated in the lab-energy range 0--350 MeV, using a number of different, latest-generation strong-interaction potentials--Argonne V18, Bonn-2000, and Nijmegen-I--in combination with a weak-interaction potential consisting of rho- and omega-meson exchanges--the model known as DDH. The complete scattering problem in the presence of parity-conserving, including Coulomb, and parity-violating potentials is solved in both configuration- and momentum-space. The predicted parity-violating asymmetries are found to be only weakly dependent upon the input strong-interaction potential adopted in the calculation. Values for the rho- and omega-meson weak coupling constants h^{pp}_\rho and h^{pp}_\omega are determined by reproducing the measured asymmetries at 13.6 MeV, 45 MeV, and 221 MeV. Comment: 24 pages, 8 figures, submitted to Physical Review
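
    The final step described above, determining h^{pp}_\rho and h^{pp}_\omega from three measured asymmetries, has a simple structure worth spelling out: in first-order perturbation theory the calculated asymmetry is linear in the two weak couplings, so the fit is a small least-squares problem. The Python sketch below shows only that structure; the response coefficients and the "measured" asymmetries are placeholders, not the paper's numbers.

        # Sketch only: fit two weak couplings to three asymmetry measurements,
        # assuming A_z(E) = a_rho(E) * h_rho^pp + a_omega(E) * h_omega^pp.
        # All numbers are placeholders, not the paper's calculated coefficients
        # or the experimental values at 13.6, 45 and 221 MeV.
        import numpy as np

        A = np.array([[ 0.10, 0.05],     # hypothetical a_rho, a_omega at 13.6 MeV
                      [ 0.09, 0.06],     # ... at 45 MeV
                      [-0.01, 0.08]])    # ... at 221 MeV
        measured_Az = np.array([-0.9e-7, -1.5e-7, 0.8e-7])   # placeholder data

        couplings, *_ = np.linalg.lstsq(A, measured_Az, rcond=None)
        h_rho_pp, h_omega_pp = couplings
        print(f"fitted h_rho^pp = {h_rho_pp:.3e}, h_omega^pp = {h_omega_pp:.3e}")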

    Transformation elastodynamics and active exterior acoustic cloaking

    This chapter consists of three parts. In the first part we recall the elastodynamic equations under coordinate transformations. The idea is to use coordinate transformations to manipulate waves propagating in an elastic material. Then we study the effect of transformations on a mass-spring network model. The transformed networks can be realized with "torque springs", which are introduced here and are springs with a force proportional to the displacement in a direction other than the direction of the spring terminals. Possible homogenizations of the transformed networks are presented, with potential applications to cloaking. In the second and third parts we present cloaking methods that are based on cancelling an incident field using active devices which are exterior to the cloaked region and that do not generate significant fields far away from the devices. In the second part, the exterior cloaking problem for the Laplace equation is reformulated as the problem of polynomial approximation of analytic functions. An explicit solution is given that makes it possible to cloak larger objects at a fixed distance from the cloaking device, compared to previous explicit solutions. In the third part we consider the active exterior cloaking problem for the Helmholtz equation in 3D. Our method uses Green's formula and an addition theorem for spherical outgoing waves to design devices that mimic the effect of the single and double layer potentials in Green's formula. Comment: Submitted as a chapter for the volume "Acoustic metamaterials: Negative refraction, imaging, lensing and cloaking", Craster and Guenneau ed., Springer
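
    For reference, the Green's formula invoked in the third part is the standard Helmholtz representation theorem: for u solving the Helmholtz equation in a bounded region D with outward unit normal n, and with the outgoing free-space Green's function G(x,y) = e^{ik|x-y|}/(4\pi|x-y|) in 3D, one has (up to the sign convention fixed by the choice of normal)

        u(x) = \int_{\partial D} \left[ G(x,y)\,\frac{\partial u}{\partial n}(y) - u(y)\,\frac{\partial G(x,y)}{\partial n_y} \right] \mathrm{d}S(y), \qquad x \in D .

    The first term is a single-layer potential with density \partial u/\partial n and the second a double-layer potential with density u; as the abstract states, the exterior active devices are designed to mimic their combined effect.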

    Parity Violation in Proton-Proton Scattering

    Measurements of parity-violating longitudinal analyzing powers (normalized asymmetries) in polarized proton-proton scattering provide a unique window on the interplay between the weak and strong interactions between and within hadrons. Several new proton-proton parity violation experiments are presently either being performed or are being prepared for execution in the near future: at TRIUMF at 221 MeV and 450 MeV and at COSY (Kernforschungsanlage Juelich) at 230 MeV and near 1.3 GeV. These experiments are intended to provide stringent constraints on the set of six effective weak meson-nucleon coupling constants, which characterize the weak interaction between hadrons in the energy domain where meson exchange models provide an appropriate description. The 221 MeV experiment is unique in that it selects a single transition amplitude (3P2-1D2) and consequently constrains the weak meson-nucleon coupling constant h^{pp}_\rho. The TRIUMF 221 MeV proton-proton parity violation experiment is described in some detail. A preliminary result for the longitudinal analyzing power is A_z = (1.1 +/- 0.4 +/- 0.4) x 10^-7. Further proton-proton parity violation experiments are commented on. The anomaly at 6 GeV/c requires that a new multi-GeV proton-proton parity violation experiment be performed. Comment: 13 pages LaTeX, 5 PostScript figures, uses espcrc1.sty. Invited talk at QULEN97, International Conference on Quark Lepton Nuclear Physics -- Nonperturbative QCD Hadron Physics & Electroweak Nuclear Processes --, Osaka, Japan, May 20--23, 1997

    Exclusion limits on the WIMP-nucleon cross-section from the Cryogenic Dark Matter Search

    The Cryogenic Dark Matter Search (CDMS) employs low-temperature Ge and Si detectors to search for Weakly Interacting Massive Particles (WIMPs) via their elastic-scattering interactions with nuclei while discriminating against interactions of background particles. For recoil energies above 10 keV, events due to background photons are rejected with >99.9% efficiency, and surface events are rejected with >95% efficiency. The estimate of the background due to neutrons is based primarily on the observation of multiple-scatter events that should all be neutrons. Data selection is determined primarily by examining calibration data and vetoed events. Resulting efficiencies should be accurate to about 10%. Results of CDMS data from 1998 and 1999 with a relaxed fiducial-volume cut (resulting in 15.8 kg-days exposure on Ge) are consistent with an earlier analysis with a more restrictive fiducial-volume cut. Twenty-three WIMP candidate events are observed, but these events are consistent with a background from neutrons in all ways tested. Resulting limits on the spin-independent WIMP-nucleon elastic-scattering cross-section exclude unexplored parameter space for WIMPs with masses between 10 and 70 GeV c^{-2}. These limits border, but do not exclude, parameter space allowed by supersymmetry models and accelerator constraints. Results are compatible with some regions reported as allowed at 3-sigma by the annual-modulation measurement of the DAMA collaboration. However, under the assumptions of standard WIMP interactions and a standard halo, the results are incompatible with DAMA's most likely value at >99.9% CL, and are incompatible with the model-independent annual-modulation signal of DAMA at 99.99% CL in the asymptotic limit. Comment: 40 pages, 49 figures (4 in color), submitted to Phys. Rev. D; v.2: clarified conclusions, added content and references based on referee's and readers' comments; v.3: clarified introductory sections, added figure based on referee's comment

    Mycobacterium tuberculosis complex genetic diversity: mining the fourth international spoligotyping database (SpolDB4) for classification, population genetics and epidemiology

    BACKGROUND: The Direct Repeat locus of the Mycobacterium tuberculosis complex (MTC) is a member of the CRISPR (Clustered regularly interspaced short palindromic repeats) sequences family. Spoligotyping is the widely used PCR-based reverse-hybridization blotting technique that assays the genetic diversity of this locus and is useful for clinical laboratories as well as for molecular epidemiology, evolutionary and population genetics. It is easy, robust, cheap, and produces highly diverse, portable numerical results, as the result of the combination of (1) Unique Events Polymorphism (UEP) and (2) Insertion-Sequence-mediated genetic recombination. Genetic convergence, although rare, was also previously demonstrated. Three previous international spoligotype databases had partly revealed the global and local geographical structures of MTC bacilli populations; however, there was a need for the release of a new, more representative and extended international spoligotyping database. RESULTS: The fourth international spoligotyping database, SpolDB4, describes 1939 shared-types (STs) representative of a total of 39,295 strains from 122 countries, which are tentatively classified into 62 clades/lineages using a mixed expert-based and bioinformatical approach. The SpolDB4 update adds 26 new potentially phylogeographically-specific MTC genotype families. It provides a clearer picture of the current diversity of MTC genomes as well as of the relationships between the genetic attributes investigated (spoligotypes) and the infra-species classification and evolutionary history of the species. Indeed, an independent Naïve-Bayes mixture-model analysis has validated most of the previous supervised SpolDB3 classification results, confirming the usefulness of both supervised and unsupervised models as approaches to understanding MTC population structure. Updated results on the epidemiological status of spoligotypes, as well as genetic prevalence maps for six main lineages, are also shown. Our results suggest the existence of fine geographical genetic clines within MTC populations that could mirror the past and present Homo sapiens sapiens demographic and mycobacterial co-evolutionary history, whose structure could be further reconstructed and modelled, thereby providing a large-scale conceptual framework of the global TB Epidemiologic Network. CONCLUSION: Our results broaden the knowledge of the global phylogeography of the MTC. SpolDB4 should be a very useful tool to better define the identity of a given MTC clinical isolate, and to better analyze the links between its current spread and its previous evolutionary history. The building and mining of extended MTC polymorphic genetic databases is in progress.
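
    As a generic illustration of the kind of unsupervised mixture-model analysis mentioned above (not the SpolDB4 pipeline): spoligotypes are commonly encoded as 43-spacer presence/absence vectors, and a mixture of independent Bernoulli components fitted by EM can group such binary patterns into candidate families. All data in the Python sketch below are random placeholders.

        # Sketch only: EM for a Bernoulli mixture over binary spoligotype vectors.
        import numpy as np

        rng = np.random.default_rng(0)
        n_strains, n_spacers, n_clades = 200, 43, 3

        # placeholder binary spoligotype matrix (rows = strains, columns = spacers)
        X = rng.integers(0, 2, size=(n_strains, n_spacers)).astype(float)

        weights = np.full(n_clades, 1.0 / n_clades)
        theta = rng.uniform(0.25, 0.75, size=(n_clades, n_spacers))  # spacer-presence probs

        for _ in range(100):
            # E-step: responsibility of each component for each strain (log domain)
            log_p = (X @ np.log(theta).T + (1 - X) @ np.log(1 - theta).T
                     + np.log(weights))
            log_p -= log_p.max(axis=1, keepdims=True)
            resp = np.exp(log_p)
            resp /= resp.sum(axis=1, keepdims=True)

            # M-step: update mixing weights and per-spacer Bernoulli parameters
            Nk = resp.sum(axis=0)
            weights = Nk / n_strains
            theta = np.clip((resp.T @ X) / Nk[:, None], 1e-3, 1 - 1e-3)

        clade_assignment = resp.argmax(axis=1)   # hard assignment of strains to clusters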

    What is the Oxygen Isotope Composition of Venus? The Scientific Case for Sample Return from Earth’s “Sister” Planet

    Venus is Earth’s closest planetary neighbour and both bodies are of similar size and mass. As a consequence, Venus is often described as Earth’s sister planet. But the two worlds have followed very different evolutionary paths, with Earth having benign surface conditions, whereas Venus has a surface temperature of 464 °C and a surface pressure of 92 bar. These inhospitable surface conditions may partially explain why there has been such a dearth of space missions to Venus in recent years.

    The oxygen isotope composition of Venus is currently unknown. However, this single measurement (Δ17O) would have first order implications for our understanding of how large terrestrial planets are built. Recent isotopic studies indicate that the Solar System is bimodal in composition, divided into a carbonaceous chondrite (CC) group and a non-carbonaceous (NC) group. The CC group probably originated in the outer Solar System and the NC group in the inner Solar System. Venus comprises 41% by mass of the inner Solar System compared to 50% for Earth and only 5% for Mars. Models for building large terrestrial planets, such as Earth and Venus, would be significantly improved by a determination of the Δ17O composition of a returned sample from Venus. This measurement would help constrain the extent of early inner Solar System isotopic homogenisation and help to identify whether the feeding zones of the terrestrial planets were narrow or wide.

    Determining the Δ17O composition of Venus would also have significant implications for our understanding of how the Moon formed. Recent lunar formation models invoke a high energy impact between the proto-Earth and an inner Solar System-derived impactor body, Theia. The close isotopic similarity between the Earth and Moon is explained by these models as being a consequence of high-temperature, post-impact mixing. However, if Earth and Venus proved to be isotopic clones with respect to Δ17O, this would favour the classic, lower energy, giant impact scenario.

    We review the surface geology of Venus with the aim of identifying potential terrains that could be targeted by a robotic sample return mission. While the potentially ancient tessera terrains would be of great scientific interest, the need to minimise the influence of venusian weathering favours the sampling of young basaltic plains. In terms of a nominal sample mass, 10 g would be sufficient to undertake a full range of geochemical, isotopic and dating studies. However, it is important that additional material is collected as a legacy sample. As a consequence, a returned sample mass of at least 100 g should be recovered.

    Two scenarios for robotic sample return missions from Venus are presented, based on previous mission proposals. The most cost effective approach involves a “Grab and Go” strategy, either using a lander and separate orbiter, or possibly just a stand-alone lander. Sample return could also be achieved as part of a more ambitious, extended mission to study the venusian atmosphere. In both scenarios it is critical to obtain a surface atmospheric sample to define the extent of atmosphere-lithosphere oxygen isotopic disequilibrium. Surface sampling would be carried out by multiple techniques (drill, scoop, “vacuum-cleaner” device) to ensure success. Surface operations would take no longer than one hour.

    Analysis of returned samples would provide a firm basis for assessing similarities and differences between the evolution of Venus, Earth, Mars and smaller bodies such as Vesta. The Solar System provides an important case study in how two almost identical bodies, Earth and Venus, could have had such a divergent evolution. Finally, Venus, with its runaway greenhouse atmosphere, may provide data relevant to the understanding of similar less extreme processes on Earth. Venus is Earth’s planetary twin and deserves to be better studied and understood. In a wider context, analysis of returned samples from Venus would provide data relevant to the study of exoplanetary systems.
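
    For context, the Δ17O value referred to throughout is conventionally defined, in its commonly used linearised form, with the δ-values expressed in per mil relative to the VSMOW standard and a terrestrial fractionation-line slope usually quoted as about 0.52, as

        \Delta^{17}\mathrm{O} = \delta^{17}\mathrm{O} - 0.52\,\delta^{18}\mathrm{O}

    A non-zero Δ17O for a returned venusian sample would indicate an oxygen reservoir distinct from the Earth-Moon system, which is why this single measurement is so diagnostic for the accretion and Moon-formation questions discussed above.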